
    Dissociated multi-unit activity and local field potentials: a theory inspired analysis of a motor decision task.

    Local field potentials (LFP) and multi-unit activity (MUA) recorded in vivo are known to convey different information about the underlying neural activity. Here we extend and support the idea that single-electrode LFP-MUA task-related modulations can shed light on the involved large-scale, multi-modular neural dynamics. We first illustrate a theoretical scheme and associated simulation evidence, proposing that in a multi-modular neural architecture local and distributed dynamic properties can be extracted from the local spiking activity of one pool of neurons in the network. From this new perspective, the spectral features of the field potentials reflect the time structure of the ongoing fluctuations of the probed local neuronal pool over a wide frequency range. We then report results obtained recording from the dorsal premotor (PMd) cortex of monkeys performing a countermanding task, in which a reaching movement is performed unless a visual stop signal is presented. We find that the LFP and MUA spectral components over a wide frequency band (3-2000 Hz) are very differently modulated in time for successful reaching trials and for successful and wrong stop trials, suggesting an interplay of local and distributed components of the underlying neural activity in different periods of the trials and for different behavioural outcomes. Moreover, the MUA spectral power is shown to possess a time-dependent structure, which we suggest could help in understanding the successive involvement of different local neuronal populations. Finally, we compare signals recorded from PMd and dorso-lateral prefrontal (PFCd) cortex in the same experiment, and speculate that the comparative time-dependent spectral analysis of LFP and MUA can help reveal patterns of functional connectivity in the brain.
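The spectral comparison above rests on estimating band-limited power across the 3-2000 Hz range. As a minimal sketch (not the authors' actual analysis pipeline), the following computes periodogram-based band power for a synthetic signal; the sampling rate, band edges, and signal composition are invented for the example:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average power of `signal` in the band [f_lo, f_hi] Hz via the periodogram."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

rng = np.random.default_rng(0)
fs = 4000.0                       # sampling rate covering the 3-2000 Hz band
t = np.arange(0, 2.0, 1.0 / fs)
# synthetic "LFP": a 20 Hz oscillation buried in white noise
x = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)

beta = band_power(x, fs, 15, 25)
gamma = band_power(x, fs, 60, 100)
print(beta > gamma)   # the 20 Hz component dominates its own band
```

Comparing such band-power estimates across trial epochs and outcomes is the basic operation behind time-dependent spectral modulation analyses.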

    Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Neuromorphic chips embody computational principles operating in the nervous system in microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
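The attractor-based retrieval described above can be sketched with a textbook Hopfield-style network; this software toy is only an idealization of the on-chip spiking dynamics, and all sizes, corruption levels, and update counts below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 3
patterns = rng.choice([-1, 1], size=(P, N))      # stored prototypical patterns
W = (patterns.T @ patterns) / N                  # Hebbian synaptic couplings
np.fill_diagonal(W, 0)

# the "stimulus": a stored pattern corrupted by flipping 15% of its units
state = patterns[0].copy()
flip = rng.choice(N, size=N * 15 // 100, replace=False)
state[flip] *= -1

# relaxation to the attractor: asynchronous sign updates
for _ in range(5):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ patterns[0]) / N
print(overlap)   # close to 1.0: the corrupted stimulus is attracted to the prototype
```

The basin-of-attraction picture is exactly this: every sufficiently similar initial state relaxes to the same stored prototype, which is what makes the dynamics usable as associative memory.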

    Population dynamics of interacting spiking neurons

    A dynamical equation is derived for the spike emission rate nu(t) of a homogeneous network of integrate-and-fire (IF) neurons in a mean-field theoretical framework, where the activity of the single cell depends both on the mean afferent current (the "field") and on its fluctuations. Finite-size effects are taken into account by a stochastic extension of the dynamical equation for nu(t); their effect on the collective activity is studied in detail. Conditions for the local stability of the collective activity are shown to be naturally and simply expressed in terms of (the slope of) the single-neuron, static, current-to-rate transfer function. In the framework of the local analysis, we study the spectral properties of the time-dependent collective activity of the finite network in an asynchronous state; finite-size fluctuations act as an ongoing self-stimulation, which probes the spectral structure of the system on a wide frequency range. The power spectrum of nu exhibits modes ranging from very high frequencies (depending on spike transmission delays), which are responsible for instability, to oscillations at a few Hz, a direct expression of the diffusion process describing the population dynamics. The latter "diffusion" slow modes do not contribute to the stability conditions. Their characteristic times govern the transient response of the network; these reaction times also exhibit a simple dependence on the slope of the neuron transfer function. We speculate on the possible relevance of our results for the change in the characteristic response time of a neural population during the learning process which shapes the synaptic couplings, thereby affecting the slope of the transfer function. The theoretical predictions are in remarkable agreement with simulations of a network of IF neurons with a constant leakage term for the membrane potential.
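A heavily simplified stand-in for such a rate equation is the first-order relaxation tau dnu/dt = -nu + Phi(J nu + I), whose fixed-point stability and transient speed are governed by the slope of the transfer function Phi. The sketch below Euler-integrates this toy equation; the sigmoidal Phi, coupling J, input I, and time constants are illustrative assumptions, not the paper's derived dynamics:

```python
import numpy as np

def phi(mu, slope=1.0):
    """Toy sigmoidal current-to-rate transfer function."""
    return 1.0 / (1.0 + np.exp(-slope * mu))

def relax(slope, J=0.5, I=0.2, tau=0.01, dt=1e-4, steps=2000):
    """Euler-integrate tau*dnu/dt = -nu + phi(J*nu + I) from nu(0) = 0."""
    nu = np.empty(steps)
    nu[0] = 0.0
    for t in range(1, steps):
        nu[t] = nu[t - 1] + dt / tau * (-nu[t - 1] + phi(J * nu[t - 1] + I, slope))
    return nu

traj = relax(slope=2.0)
# the trajectory settles onto the self-consistent fixed point nu* = phi(J*nu* + I)
print(traj[-1])
```

In this caricature, a steeper (or shallower) transfer-function slope changes the linearized relaxation rate around the fixed point, which is the mechanism behind the slope-dependent reaction times discussed in the abstract.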

    Inferring Synaptic Structure in presence of Neural Interaction Time Scales

    Biological networks display a variety of activity patterns reflecting a web of interactions that is complex both in space and time. Yet inference methods have mainly focused on reconstructing, from the network's activity, the spatial structure, by assuming equilibrium conditions or, more recently, a probabilistic dynamics with a single arbitrary time-step. Here we show that, under this latter assumption, the inference procedure fails to reconstruct the synaptic matrix of a network of integrate-and-fire neurons when the chosen time scale of interaction does not closely match the synaptic delay, or when no single time scale for the interaction can be identified; such failure, moreover, exposes a distinctive bias of the inference method that can lead it to infer as inhibitory those excitatory synapses with interaction time scales longer than the model's time-step. We therefore introduce a new two-step method that first infers, through cross-correlation profiles, the delay structure of the network and then reconstructs the synaptic matrix, and we successfully test it on networks with different topologies and in different activity regimes. Although step one is able to accurately recover the delay structure of the network, thus getting rid of any a priori guess about the time scales of the interaction, the inference method nonetheless introduces an arbitrary time scale: the time-bin dt used to binarize the spike trains. We therefore analytically and numerically study how the choice of dt affects the inference in our network model, finding that the relationship between the inferred couplings and the real synaptic efficacies, albeit being quadratic in both cases, depends critically on dt for the excitatory synapses only, whilst being basically independent of it for the inhibitory ones.
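Step one of the method, recovering the delay structure from cross-correlation profiles, can be illustrated on synthetic binarized spike trains. This is a minimal sketch: the firing rates, coupling probability, and true delay below are arbitrary choices for the demo, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
T, delay = 20000, 4                       # number of bins; true synaptic delay (bins)
pre = (rng.random(T) < 0.05).astype(float)
# postsynaptic train: background spikes plus spikes induced `delay` bins after `pre`
post = (rng.random(T) < 0.02).astype(float)
post[delay:] = np.maximum(post[delay:], pre[:-delay] * (rng.random(T - delay) < 0.6))

def cross_corr(x, y, max_lag):
    """Cross-correlation of mean-subtracted binarized trains at lags 1..max_lag."""
    x = x - x.mean()
    y = y - y.mean()
    return np.array([(x[:-k] * y[k:]).sum() for k in range(1, max_lag + 1)])

cc = cross_corr(pre, post, max_lag=10)
print(int(np.argmax(cc)) + 1)   # lag of the correlogram peak = recovered delay: 4
```

Once the peak lag is known per synapse, the coupling inference can use the matching time scale instead of a single a priori time-step, which is what removes the excitatory-to-inhibitory misclassification bias described above.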

    Learning selective top-down control enhances performance in a visual categorization task.

    We model the putative neuronal and synaptic mechanisms involved in learning a visual categorization task, taking inspiration from single-cell recordings in inferior temporal cortex (ITC). Our working hypothesis is that learning the categorization task involves both bottom-up (ITC to prefrontal cortex, PFC) and top-down (PFC to ITC) synaptic plasticity, and that the latter enhances the selectivity of the ITC neurons encoding the task-relevant features of the stimuli, thereby improving the signal-to-noise ratio. We test this hypothesis by modeling both areas and their connections with spiking neurons and plastic synapses, ITC acting as a feature-selective layer and PFC as a category-coding layer. This minimal model gives interesting clues as to the properties and function of the selective feedback signal from PFC to ITC that helps solve a categorization task. In particular, we show that, when the stimuli are very noisy because of a large number of nonrelevant features, the feedback structure yields better categorization performance and shorter reaction times. It also affects the speed and stability of the learning process and sharpens the tuning curves of ITC neurons. Furthermore, the model predicts a modulation of neural activities during error trials, by which the differential selectivity of ITC neurons to task-relevant and task-irrelevant features diminishes or is even reversed, as well as modulations in the time course of neural activities that appear when, after learning, corrupted versions of the stimuli are input to the network.

    Density-based clustering: A 'landscape view' of multi-channel neural data for inference and dynamic complexity analysis

    Two, partially interwoven, hot topics in the analysis and statistical modeling of neural data are the development of efficient and informative representations of the time series derived from multiple neural recordings, and the extraction of information about the connectivity structure of the underlying neural network from the recorded neural activities. In the present paper we show that state-space clustering can provide an easy and effective option for reducing the dimensionality of multiple neural time series, that it can improve inference of synaptic couplings from neural activities, and that it can also allow the construction of a compact representation of the multi-dimensional dynamics that easily lends itself to complexity measures. We apply a variant of the 'mean-shift' algorithm to perform state-space clustering, and validate it on a Hopfield network in the glassy phase, in which metastable states are largely uncorrelated with the memories embedded in the synaptic matrix. In this context, we show that the neural states identified as the clusters' centroids offer a parsimonious parametrization of the synaptic matrix, which allows a significant improvement in inferring the synaptic couplings from the neural activities. Moving to the more realistic case of a multi-modular spiking network, with spike-frequency adaptation inducing history-dependent effects, we propose a procedure inspired by Boltzmann learning, but extending its domain of application, to learn inter-module synaptic couplings so that the spiking network reproduces a prescribed pattern of spatial correlations; we then illustrate, in the spiking network, how clustering is effective in extracting relevant features of the network's state-space landscape.
    Finally, we show that the knowledge of the cluster structure allows casting the multi-dimensional neural dynamics in the form of a symbolic dynamics of transitions between clusters; as an illustration of the potential of such a reduction, we define and analyze a measure of complexity of the neural time series.
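The pipeline sketched in the abstract (mean-shift clustering of the state space, then a symbolic dynamics over cluster labels with an entropy-style complexity measure) can be illustrated on toy 2D data. The bandwidth, cluster geometry, and the particular entropy-rate measure below are assumptions made for the example, not the paper's definitions:

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic "state-space" trajectory hopping between two metastable states
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
visits = rng.integers(0, 2, size=400)
points = centers[visits] + 0.3 * rng.standard_normal((400, 2))

def mean_shift(points, bandwidth=1.0, iters=30):
    """Move each point toward the Gaussian-weighted mean of its neighbours."""
    modes = points.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            w = np.exp(-np.sum((points - m) ** 2, axis=1) / (2 * bandwidth ** 2))
            modes[i] = (w[:, None] * points).sum(0) / w.sum()
    return modes

modes = mean_shift(points)
# merge numerically identical modes: each time point gets a cluster label
uniq, labels = np.unique(np.round(modes, 1), axis=0, return_inverse=True)
print(len(uniq))   # number of clusters found: 2

# symbolic dynamics: entropy rate of the transition matrix between clusters
trans = np.zeros((len(uniq), len(uniq)))
for a, b in zip(labels[:-1], labels[1:]):
    trans[a, b] += 1
p = trans / trans.sum(axis=1, keepdims=True)
stationary = trans.sum(1) / trans.sum()
H = -(stationary[:, None] * p * np.log2(np.where(p > 0, p, 1))).sum()
print(H)           # bits per transition; near 1 here since hops are random
```

The reduction from a 2D (in general, high-dimensional) trajectory to the label sequence is what turns the neural dynamics into a symbolic dynamics on which complexity measures become cheap to compute.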

    A Multidimensional Evaluation Approach for the Natural Parks Design

    The design of a natural park is generated by the need to protect and organize, for conservation and/or for balanced growth, parts of the territory that are of particular interest for the quality of their natural and historical-cultural heritage. The necessary tools to support the decision-making process in the design of a natural park are the financial and economic evaluations, which intervene in three successive steps: in the definition of the protection and enhancement levels of the park areas; in the choice of the interventions to be implemented for the realization of these levels of protection and enhancement; and in determining and verifying the economic and financial results obtainable from the project execution. This contribution deals with aspects and issues relating to the economic and financial evaluation of natural park projects. In particular, an application of the "Complex Social Value" to a concrete case of environmental design is developed on the basis of the elements that can be deduced from a feasibility study of a natural park: the levels of protection and enhancement of the homogeneous areas of the natural park are preliminarily defined, and the choice of the design alternative to be implemented is then rationalized with multicriteria analysis.

    Maximization of mutual information in a linear noisy network: a detailed study

    We consider a linear, one-layer feedforward neural network performing a coding task. The goal of the network is to provide a statistical neural representation that conveys as much information as possible on the input stimuli in noisy conditions. We determine the family of synaptic couplings that maximizes the mutual information between the input and output distributions. Optimization is performed under different constraints on the synaptic efficacies. We analyse the dependence of the solutions on input and output noises. This work goes beyond previous studies of the same problem in that: (i) we perform a detailed stability analysis in order to find the global maxima of the mutual information; (ii) we examine the properties of the optimal synaptic configurations under different constraints; and (iii) we do not assume translational invariance of the input data, as is usually done when the inputs are assumed to be visual stimuli.
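For a linear network with Gaussian inputs and additive Gaussian output noise, y = Wx + n, the mutual information has the standard closed form I(x; y) = (1/2) log2 det(I + W C_x W^T / sigma^2), which is the quantity being maximized over W. The sketch below evaluates it for a random coupling matrix; the dimensions, input covariance, and noise levels are illustrative assumptions:

```python
import numpy as np

def gaussian_mi(W, C_in, out_var):
    """I(x; y) in bits for y = W x + n, with x ~ N(0, C_in) and n ~ N(0, out_var*I)."""
    C_out = W @ C_in @ W.T / out_var
    sign, logdet = np.linalg.slogdet(np.eye(W.shape[0]) + C_out)
    return 0.5 * logdet / np.log(2)

rng = np.random.default_rng(4)
n_in, n_out = 8, 8
C_in = np.eye(n_in)            # white inputs; no translational invariance is assumed
W = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)

# information transmitted grows as the output noise shrinks
print(gaussian_mi(W, C_in, 1.0) < gaussian_mi(W, C_in, 0.1))
```

Constrained optimization of this objective over W (e.g. under bounded synaptic norms), followed by a stability analysis of the critical points, is the program the abstract describes.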